
    A Moral Framework for Understanding Fair ML through Economic Models of Equality of Opportunity

    We map the recently proposed notions of algorithmic fairness to economic models of equality of opportunity (EOP), an extensively studied ideal of fairness in political philosophy. We formally show that, through our conceptual mapping, many existing definitions of algorithmic fairness, such as predictive value parity and equality of odds, can be interpreted as special cases of EOP. In this respect, our work serves as a unifying moral framework for understanding existing notions of algorithmic fairness. Most importantly, this framework allows us to explicitly spell out the moral assumptions underlying each notion of fairness and to interpret recent fairness impossibility results in a new light. Last but not least, inspired by luck egalitarian models of EOP, we propose a new family of measures for algorithmic fairness. We illustrate our proposal empirically and show that employing a measure of algorithmic (un)fairness when its underlying moral assumptions are not satisfied can have devastating consequences for the disadvantaged group's welfare.
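    The two criteria the abstract names have standard statistical readings. A minimal sketch of how each parity check can be computed from binary predictions (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group TPR, FPR, and positive predictive value from 0/1 arrays."""
    rates = {}
    for g in np.unique(group):
        m = group == g
        yt, yp = y_true[m], y_pred[m]
        rates[g] = {
            "tpr": yp[yt == 1].mean(),  # P(Yhat = 1 | Y = 1, A = g)
            "fpr": yp[yt == 0].mean(),  # P(Yhat = 1 | Y = 0, A = g)
            "ppv": yt[yp == 1].mean(),  # P(Y = 1 | Yhat = 1, A = g)
        }
    return rates

# Equality of odds holds when TPR and FPR match across groups;
# predictive value parity holds when PPV (and symmetrically NPV) match.
```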

    Choosing how to discriminate: navigating ethical trade-offs in fair algorithmic design for the insurance sector

    Here, we provide an ethical analysis of discrimination in private insurance to guide the application of non-discriminatory algorithms for risk prediction in the insurance context. This addresses the need for ethical guidance for data-science experts, business managers, and regulators, proposing a framework of moral reasoning behind the choice of fairness goals for prediction-based decisions in the insurance domain. The reference to private insurance as a business practice is essential to our approach, because the consequences of discrimination and predictive inaccuracy in underwriting differ from those of using predictive algorithms in other sectors (e.g., medical diagnosis, sentencing). Here we focus on the trade-off between pursuing indirect non-discrimination and preserving predictive accuracy. The moral assessment of this trade-off depends on the context of application, that is, on the consequences of inaccurate risk predictions in the insurance domain.

    Ethical and Unethical Hacking

    The goal of this chapter is to provide a conceptual analysis of ethical hacking, comprising its history, common usage, and an attempt to provide a systematic classification that is both compatible with common usage and normatively adequate. Subsequently, the article identifies a tension between common usage and a normatively adequate nomenclature. ‘Ethical hackers’ are often identified with hackers who abide by a code of ethics privileging business-friendly values. However, there is no guarantee that respecting such values is always compatible with the all-things-considered morally best act. It is recognised, however, that in terms of assessment, it may be quite difficult to determine who is an ethical hacker in the ‘all things considered’ sense, while society may agree more easily on who is one in the limited ‘business-friendly’ sense. The article concludes by suggesting a pragmatic best-practice approach to characterising ethical hacking, one that reaches beyond business-friendly values and helps in making decisions that respect hackers’ individual ethics in morally debatable grey zones.

    How I Would Have Been Differently Treated: Discrimination Through the Lens of Counterfactual Fairness

    The widespread use of algorithms for prediction-based decisions urges us to consider what it means for a given act or practice to be discriminatory. Building upon work by Kusner and colleagues in the field of machine learning, we propose a counterfactual condition as a necessary requirement on discrimination. To demonstrate the philosophical relevance of the proposed condition, we consider two prominent accounts of discrimination in the recent literature, by Lippert-Rasmussen and Hellman respectively, that do not logically imply our condition, and we show that they face important objections. Specifically, Lippert-Rasmussen’s definition proves to be over-inclusive, as it classifies some acts or practices as discriminatory when they are not, whereas Hellman’s account turns out to lack explanatory power precisely insofar as it does not countenance a counterfactual condition on discrimination. By defending the necessity of our counterfactual condition, we set the conceptual limits for justified claims about the occurrence of discriminatory acts or practices in society, with immediate applications to the ethics of algorithmic decision-making.
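    For reference, the counterfactual fairness condition of Kusner and colleagues, on which the proposed necessary condition builds, can be stated as follows (our rendering in structural-causal-model notation, where $\hat{Y}$ is the predictor, $A$ the protected attribute, $X$ the remaining features, and $U$ the background variables):

$$
P\left(\hat{Y}_{A \leftarrow a}(U) = y \mid X = x, A = a\right)
= P\left(\hat{Y}_{A \leftarrow a'}(U) = y \mid X = x, A = a\right)
\quad \text{for all } y \text{ and all attainable } a',
$$

    i.e., the prediction an individual receives would not change in the counterfactual world where only their protected attribute is set to a different value.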

    Distributive Justice as the Foundational Premise of Fair ML: Unification, Extension, and Interpretation of Group Fairness Metrics

    Group fairness metrics are an established way of assessing the fairness of prediction-based decision-making systems. However, these metrics are still insufficiently linked to philosophical theories, and their moral meaning is often unclear. In this paper, we propose a comprehensive framework for group fairness metrics, which links them to theories of distributive justice. Group fairness metrics differ in their choices about how to measure the benefit or harm of a decision for the affected individuals, and about what moral claims to benefits are assumed. Our unifying framework reveals the normative choices associated with standard group fairness metrics and allows an interpretation of their moral substance. In addition, this broader view provides a structure for the expansion of standard fairness metrics found in the literature. This expansion makes it possible to address several criticisms of standard group fairness metrics, specifically: (1) they are parity-based, i.e., they demand some form of equality between groups, which may sometimes be detrimental to marginalized groups; (2) they only compare decisions across groups but not the resulting consequences for these groups; and (3) the full breadth of the distributive justice literature is not sufficiently represented.
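    To make criticism (2) concrete, a minimal sketch under purely illustrative assumptions (parity_gap, utility, and the benefit numbers are hypothetical, not from the paper): decisions can be distributed equally across groups while the benefits those decisions produce are not.

```python
import numpy as np

def parity_gap(values, group):
    """Largest between-group difference in the mean of `values`."""
    means = [values[group == g].mean() for g in np.unique(group)]
    return max(means) - min(means)

# Hypothetical utility: a positive decision (e.g., a loan) benefits those who
# would repay (y = 1) and harms those who would not (y = 0); the numbers
# 1.0 and -0.5 are illustrative only.
def utility(y_pred, y_true):
    return y_pred * np.where(y_true == 1, 1.0, -0.5)

# Standard parity-based check: compares the decisions themselves.
#   decision_gap = parity_gap(y_pred, group)
# Consequence-sensitive extension: compares the resulting benefits instead.
#   benefit_gap = parity_gap(utility(y_pred, y_true), group)
```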

    People are not coins: a reply to Hedden

    This paper is a reply to "On Statistical Criteria of Algorithmic Fairness" by Brian Hedden. We question the significance of arguing that many group fairness criteria discussed in the machine learning literature are not necessary conditions for the fairness of predictions or decisions based on them. We show that it may be true, in general, that a criterion F is not a necessary condition for the fairness of all predictions (or decisions based on them), and yet, compatibly with this, most predictions or decisions involving people could be unfair if they violate the statistical fairness constraint F.

    People are not coins: Morally distinct types of predictions necessitate different fairness constraints

    In a recent paper, Brian Hedden has argued that most of the group fairness constraints discussed in the machine learning literature are not necessary conditions for the fairness of predictions, and hence that there are no genuine fairness metrics. He demonstrates this by discussing a special case of a fair prediction. In our paper, we show that Hedden's argument does not hold for the most common kind of predictions used in data science, which are about people and based on data from similar people; we call these “human-group-based practices.” We argue that there is a morally salient distinction between human-group-based practices and those that are based on data of only one person, which we call “human-individual-based practices.” Thus, what may be a necessary condition for the fairness of human-group-based practices may not be a necessary condition for the fairness of human-individual-based practices, on which Hedden's argument is based. Accordingly, the group fairness metrics discussed in the machine learning literature may still be relevant for most applications of prediction-based decision making.